feat(aiPlatformConfiguration): add contextMemory tuning block#27455
Expose T0 user-preference memory retrieval limits (`tokenBudget`, `maxItems`) via `aiPlatformConfiguration.contextMemory` so deployments can tune the block that Collate ships to the AI Platform via the gRPC `user_memory_context` field without code changes. Both fields default to the values previously hard-coded in Collate's `ContextMemoryRetrievalService` (500 tokens, 5 items), so existing deployments behave identically.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pull request overview
Adds an optional `contextMemory` sub-configuration to the AI Platform configuration schema so deployments can tune how much T0 "user preference" memory Collate emits via the gRPC `user_memory_context` field.
Changes:
- Introduces a `contextMemoryConfiguration` schema definition with `tokenBudget` and `maxItems` (defaults 500 / 5).
- Adds a new `contextMemory` property to `AiPlatformConfiguration` referencing the new definition.
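Based on the changes described above, the new definition likely resembles the following sketch. The property names (`tokenBudget`, `maxItems`) and defaults (500 / 5) come from the PR description; the surrounding structure, descriptions, and `additionalProperties` setting are assumptions, not the exact schema from the diff:

```json
{
  "definitions": {
    "contextMemoryConfiguration": {
      "type": "object",
      "description": "Tuning for the T0 user-preference memory block sent via user_memory_context.",
      "properties": {
        "tokenBudget": {
          "type": "integer",
          "description": "Maximum token budget for retrieved user-preference memory.",
          "default": 500
        },
        "maxItems": {
          "type": "integer",
          "description": "Maximum number of memory items to retrieve.",
          "default": 5
        }
      },
      "additionalProperties": false
    }
  }
}
```

Declaring the defaults in the schema (rather than only in code) lets generated types and documentation surface them, while the service can still fall back to the same hard-coded values when the block is absent.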
✅ TypeScript Types Auto-Updated: The generated TypeScript types have been automatically updated based on JSON schema changes in this PR.
Code Review ✅ Approved: Introduces the `contextMemory` tuning block to the AI platform configuration. No issues were found.
🔴 Playwright Results: 1 failed, 23 flaky. ✅ 3639 passed · ❌ 1 failed · 🟡 23 flaky · ⏭️ 111 skipped

Genuine Failures (failed on all attempts): ❌
* feat(aiPlatformConfiguration): add contextMemory tuning block

  Expose T0 user-preference memory retrieval limits (tokenBudget, maxItems) via aiPlatformConfiguration.contextMemory so deployments can tune the block that Collate ships to the AI Platform via the gRPC user_memory_context field without code changes. Both fields default to the values previously hard-coded in Collate's ContextMemoryRetrievalService (500 tokens, 5 items), so existing deployments behave identically.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Update generated TypeScript types

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>



Summary
Adds a `contextMemory` sub-config to `aiPlatformConfiguration` so deployments can tune the T0 user-preference memory block that Collate ships to the AI Platform via the gRPC `user_memory_context` field without code changes. Both fields default to the values previously hard-coded in Collate's `ContextMemoryRetrievalService` (500 tokens / 5 items), so existing deployments behave identically.
This is paired with the Collate PR that reads this config at service construction time and passes it into `ContextMemoryRetrievalService`.
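A deployment overriding the defaults might add a block like the following to its AI Platform configuration. The values shown are illustrative, and the exact nesting of the top-level key depends on how the deployment supplies `aiPlatformConfiguration`:

```json
{
  "aiPlatformConfiguration": {
    "contextMemory": {
      "tokenBudget": 1000,
      "maxItems": 10
    }
  }
}
```

Omitting `contextMemory` entirely leaves the previous hard-coded behavior (500 tokens, 5 items) in place, which is what keeps existing deployments unchanged.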
Test plan
🤖 Generated with Claude Code